13 research outputs found

    HENA, heterogeneous network-based data set for Alzheimer's disease.

    Alzheimer's disease and other types of dementia are the leading cause of disability in later life, and a wide range of experiments have been performed to understand the underlying mechanisms of the disease with the aim of identifying potential drug targets. These experiments have been carried out by scientists working in different domains, such as proteomics, molecular biology, clinical diagnostics, and genomics. The results of such experiments are stored in databases designed for collecting data of similar types. However, to get a systematic view of the disease from these independent but complementary data sets, it is necessary to combine them. In this study we describe a heterogeneous network-based data set for Alzheimer's disease (HENA). Additionally, we demonstrate the application of state-of-the-art graph convolutional networks, i.e. deep learning methods, to the analysis of such large heterogeneous biological data sets. We expect HENA to allow scientists to explore and analyze their own results in the broader context of Alzheimer's disease research.
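The core operation of the graph convolutional networks mentioned above is neighbourhood aggregation: each node's features are replaced by a combination of its own features and those of its neighbours. A minimal sketch of one mean-aggregation step, with an invented toy graph (HENA's real schema and the learned weight matrices are omitted):

```python
# One mean-aggregation message-passing step over a small graph.
# Node names, edges, and features are illustrative only.

def gcn_mean_step(adj, feats):
    """adj: {node: [neighbour, ...]}, feats: {node: [float, ...]}.
    Returns new features: the mean over each node's closed neighbourhood."""
    new_feats = {}
    for node, nbrs in adj.items():
        group = [node] + nbrs            # self-loop plus neighbours
        dim = len(feats[node])
        agg = [0.0] * dim
        for n in group:
            for i in range(dim):
                agg[i] += feats[n][i]
        new_feats[node] = [x / len(group) for x in agg]
    return new_feats

# Toy 3-node heterogeneous graph: gene -- protein -- disease
adj = {"gene": ["protein"], "protein": ["gene", "disease"], "disease": ["protein"]}
feats = {"gene": [1.0, 0.0], "protein": [0.0, 1.0], "disease": [1.0, 1.0]}
updated = gcn_mean_step(adj, feats)
```

A full GCN layer would additionally multiply by a learned weight matrix and apply a nonlinearity; this sketch only shows the propagation structure.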

    Predictive Process Monitoring Methods: Which One Suits Me Best?

    Predictive process monitoring has recently gained traction in academia and is also maturing in industry. However, with the growing body of research, it can be daunting for companies to navigate this domain in order to determine, given certain data, what can be predicted and which methods to use. The main objective of this paper is to develop a value-driven framework for classifying existing work on predictive process monitoring. This objective is achieved by systematically identifying, categorizing, and analyzing existing approaches for predictive process monitoring. The review is then used to develop a value-driven framework that can support organizations in navigating the predictive process monitoring field and help them find value and exploit the opportunities enabled by these analysis techniques.

    Predicting critical behaviors in business process executions: when evidence counts

    Organizations need to monitor the execution of their processes to ensure they comply with a set of constraints derived, e.g., from internal managerial choices or from external legal requirements. However, preventive systems that force users to adhere to the prescribed behavior are often too rigid for real-world processes, where users might need to deviate to react to unpredictable circumstances. An effective strategy for reducing the risks associated with those deviations is to predict whether undesired behaviors will occur in running process executions, thus allowing a process analyst to respond promptly to such violations. In this work, we present a predictive process monitoring technique based on Subjective Logic. Compared to previous work on predictive monitoring, our approach makes it easy to customize both the reliability and the sensitivity of the predictive system. We evaluate our approach on synthetic data, also comparing it with previous work.
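Subjective Logic, on which the technique above is built, represents an assessment as a binomial opinion derived from evidence counts. A minimal sketch of that mapping, using the standard formulation (belief, disbelief, and uncertainty from positive/negative observations with non-informative prior weight W = 2); the paper's exact use of these opinions is not reproduced here:

```python
# Binomial opinion from evidence counts (standard Subjective Logic).
# r = observations supporting a proposition (e.g. "a violation will occur"),
# s = observations contradicting it.

def opinion(r, s, base_rate=0.5, W=2.0):
    total = r + s + W
    b = r / total          # belief mass
    d = s / total          # disbelief mass
    u = W / total          # uncertainty mass, shrinks as evidence accumulates
    e = b + base_rate * u  # projected (expected) probability
    return b, d, u, e

b, d, u, e = opinion(8, 2)  # 8 supporting vs. 2 contradicting observations
```

Note how `u` makes the amount of evidence explicit: the same belief/disbelief ratio backed by more observations yields lower uncertainty, which is what lets such a system trade off reliability against sensitivity.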

    Explainability in Predictive Process Monitoring: When Understanding Helps Improving

    Predictive business process monitoring techniques aim at making predictions about the future state of the executions of a business process, such as the remaining execution time, the next activity that will be executed, or the final outcome with respect to a set of possible outcomes. However, in general, the accuracy of a predictive model is not optimal, so that in some cases the predictions provided by the model are wrong. In addition, state-of-the-art techniques for predictive process monitoring do not explain which features induced the predictive model to provide wrong predictions, so it is difficult to understand why the predictive model was mistaken. In this paper, we propose a novel approach to explain why a predictive model for outcome-oriented predictions provides wrong predictions, and eventually to improve its accuracy. The approach leverages post-hoc explainers and different encodings for identifying the most common features that induce a predictor to make mistakes. By reducing the impact of those features, the accuracy of the predictive model is increased. The approach has been validated on both synthetic and real-life logs.
    Williams Rizzi; Chiara Di Francescomarino; Fabrizio Maria Maggi
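The counting step at the heart of the approach above can be sketched as follows. The explainer output is mocked here (the paper uses real post-hoc explainers over different trace encodings); only the idea of ranking features by how often they drive wrong predictions is shown:

```python
from collections import Counter

# For each wrongly predicted case, collect the features a post-hoc
# explainer flagged as most influential, then rank features by how
# often they appear among mistakes. Feature names are illustrative.

def mistake_driving_features(cases):
    """cases: list of (is_wrong, [top attributed feature names])."""
    tally = Counter()
    for is_wrong, top_feats in cases:
        if is_wrong:
            tally.update(top_feats)
    return tally.most_common()

cases = [
    (True,  ["activity=Ship", "amount"]),
    (True,  ["activity=Ship"]),
    (False, ["resource=R1"]),
]
ranking = mistake_driving_features(cases)
```

Features at the top of the ranking are candidates for the "reduce their impact and retrain" step the abstract describes.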

    Predictive business process monitoring with LSTM neural networks

    Predictive business process monitoring methods exploit logs of completed cases of a process in order to make predictions about running cases of that process. Existing methods in this space are tailor-made for specific prediction tasks. Moreover, their relative accuracy is highly sensitive to the dataset at hand, thus requiring users to engage in trial-and-error and tuning when applying them in a specific setting. This paper investigates Long Short-Term Memory (LSTM) neural networks as an approach to build consistently accurate models for a wide range of predictive process monitoring tasks. First, we show that LSTMs outperform existing techniques to predict the next event of a running case and its timestamp. Next, we show how to use models for predicting the next task in order to predict the full continuation of a running case. Finally, we apply the same approach to predict the remaining time, and show that this approach outperforms existing tailor-made methods.
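A sketch of the kind of input encoding typically fed to an LSTM next-activity predictor: each trace prefix becomes a sequence of one-hot activity vectors, paired with the activity that follows the prefix as the training target. Activity names are invented, and the paper's full encoding also covers timestamps, which are omitted here:

```python
# Turn one completed trace into (prefix, next-activity) training samples
# for a sequence model. Each event is one-hot encoded over the alphabet.

def encode_prefixes(trace, alphabet):
    idx = {a: i for i, a in enumerate(alphabet)}
    samples = []
    for k in range(1, len(trace)):
        prefix = trace[:k]
        one_hot = [[1.0 if idx[a] == i else 0.0 for i in range(len(alphabet))]
                   for a in prefix]
        samples.append((one_hot, trace[k]))  # (LSTM input, prediction target)
    return samples

alphabet = ["register", "check", "approve", "notify"]
trace = ["register", "check", "approve"]
samples = encode_prefixes(trace, alphabet)
```

Feeding such variable-length one-hot sequences to an LSTM with a softmax output over the alphabet yields a next-activity classifier; applying it repeatedly to its own output produces the full-continuation predictions the abstract mentions.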

    An Eye into the Future: Leveraging A-priori Knowledge in Predictive Business Process Monitoring

    Predictive business process monitoring aims at leveraging past process execution data to predict how ongoing (uncompleted) process executions will unfold up to their completion. Nevertheless, cases exist in which, together with past execution data, some additional knowledge (a-priori knowledge) about how a process execution will develop in the future is available. This knowledge about the future can be leveraged to improve the quality of the predictions of events that are currently unknown. In this paper, we present two techniques, based on Recurrent Neural Networks with Long Short-Term Memory (LSTM) cells, that leverage knowledge about the structure of the process execution traces as well as a-priori knowledge about how they will unfold in the future for predicting the sequence of future activities of ongoing process executions. The results obtained by applying these techniques on six real-life logs show an improvement in terms of accuracy over a plain LSTM-based baseline.
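One simple way to read the idea above: if the model proposes several candidate continuations (e.g. from a beam search over LSTM outputs), a-priori knowledge can rule out the ones that are known to be impossible. The constraint below ("'approve' must eventually occur") and the candidate continuations are purely illustrative, a deliberately simplified stand-in for the techniques in the paper:

```python
# Keep only candidate continuations consistent with a known constraint
# on the future of the execution. Activities are invented examples.

def filter_by_apriori(candidates, must_eventually):
    """candidates: list of activity sequences; keep those in which
    the required activity eventually occurs."""
    return [c for c in candidates if must_eventually in c]

candidates = [
    ["check", "approve", "notify"],
    ["check", "reject", "notify"],
]
kept = filter_by_apriori(candidates, "approve")
```

Discarding constraint-violating continuations lets the predictor redistribute its confidence among futures that are actually possible, which is the intuition behind the reported accuracy gains.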

    Intra and Inter-case Features in Predictive Process Monitoring: A Tale of Two Dimensions

    Predictive process monitoring is concerned with predicting measures of interest for a running case (e.g., a business outcome or the remaining time) based on historical event logs. Most current predictive process monitoring approaches only consider intra-case information, i.e., information that comes from the case whose measures of interest one wishes to predict. However, in many systems, the outcome of a running case depends on the interplay of all cases that are being executed concurrently. For example, in many situations, running cases compete over scarce resources. In this paper, following standard predictive process monitoring approaches, we employ supervised machine learning for prediction. In particular, we present a method for feature encoding of process cases that relies on a bi-dimensional state space representation: the first dimension corresponds to intra-case dependencies, while the second dimension reflects inter-case dependencies to represent shared information among running cases. The inter-case encoding derives features based on the notion of case types that can be used to partition the event log into clusters of cases that share common characteristics. To demonstrate the usefulness and applicability of the method, we evaluated it against two real-life datasets: one from an Israeli emergency department process and one open dataset of a manufacturing process.
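The bi-dimensional encoding described above can be sketched as a feature vector that concatenates intra-case attributes of the running case with inter-case counts of concurrent cases per case type. The case types and attributes below are invented for illustration; the paper derives its case types from the log itself:

```python
# Concatenate intra-case features (about the running case) with
# inter-case features (counts of concurrently running cases per type).

def encode(case, concurrent_cases, case_types):
    intra = [len(case["events"]), case["elapsed"]]          # intra-case dimension
    inter = [sum(1 for c in concurrent_cases if c["type"] == t)
             for t in case_types]                            # inter-case dimension
    return intra + inter

case = {"events": ["triage", "exam"], "elapsed": 42.0, "type": "urgent"}
running = [{"type": "urgent"}, {"type": "urgent"}, {"type": "regular"}]
vec = encode(case, running, ["urgent", "regular"])
# → [2, 42.0, 2, 1]
```

The resulting vector can be fed to any standard supervised learner; the inter-case counts are what lets the model pick up on resource contention among concurrent cases.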